Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. The paper proposes to approximate kernel functions with random functions and then to run stochastic gradient descent in the resulting feature space. Convergence rates and generalisation bounds are derived, and experimental results on large datasets are presented. The idea of introducing random functions to approximate kernel functions and then using SGD is very interesting.
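The random-function approximation the review refers to can be illustrated with random Fourier features for an RBF kernel, a standard instance of the technique; the kernel choice, unit bandwidth, dimensions, and names below are illustrative assumptions, not details from the paper. Once inputs are mapped to these features, ordinary SGD on a linear model stands in for kernelized learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, W, b):
    """Map rows of X to D random cosine features whose inner products
    approximate an RBF kernel."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Approximate k(x, y) = exp(-||x - y||^2 / 2) with D random features.
d, D = 5, 4000
W = rng.standard_normal((d, D))       # frequencies for a unit-bandwidth RBF
b = rng.uniform(0.0, 2.0 * np.pi, D)  # random phase offsets

x = rng.standard_normal((1, d))
y = rng.standard_normal((1, d))
exact = np.exp(-0.5 * np.sum((x - y) ** 2))
approx = (random_fourier_features(x, W, b) @ random_fourier_features(y, W, b).T).item()
```

The Monte Carlo error of the feature inner product decays as O(1/sqrt(D)), which is what makes the subsequent SGD analysis tractable.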



"NIPS Neural Information Processing Systems 8-11th December 2014, Montreal, Canada",,, "Paper ID:","592" "Title:","Inferring synaptic conductances from spike trains with a biophysically inspired point process model" Current Reviews First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. The authors propose a conductance based spiking model (CBSM) that is more biophysically realistic than the currently popular generalized linear model (GLM). Furthermore, the authors present CBSM as a generalization of the GLM and propose a set of constraints that can reduce it to a GLM and a GLM variant that would be as adaptive as the CBSM. The proposed model is an interesting extension to current spiking models in that it is parametrized in a more descriptive way of the spiking process without sacrificing much of the mathematical convenience of the GLM. One thing that could raise some concerns stems from the last paragraph of page 6.



First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. The paper proposes to use a deep convolutional neural network for deblurring images, trained on a large number of blurred/sharp image pairs generated with a synthetic blurring process. The proposed method achieves good results on a number of image deblurring tasks. The idea is simple and elegant. It is observed that 2D deconvolution, the inverse of the convolution operator, is itself a 2D convolution, albeit with a very large support.
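The observation that deconvolution is itself a convolution with much larger support can be checked numerically; a 1D sketch with a toy kernel of my own choosing (computed via the Fourier domain, where the inverse filter is pointwise 1/F):

```python
import numpy as np

n = 256
# A 3-tap symmetric blur, circularly indexed so its DFT is real and
# strictly positive -- hence an exact inverse filter exists.
blur = np.zeros(n)
blur[0], blur[1], blur[-1] = 0.6, 0.2, 0.2

F = np.fft.fft(blur)
inv = np.real(np.fft.ifft(1.0 / F))  # the deconvolution filter as a spatial kernel

# The blur has 3 nonzero taps; its inverse spreads over far more.
support = int(np.sum(np.abs(inv) > 1e-6))
```

Circularly convolving `blur` with `inv` recovers a unit impulse, confirming `inv` really is the deconvolution operator, yet its support is an order of magnitude wider than the blur's; this is why the paper's CNN needs large effective receptive fields.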




First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This is a very well-written paper that explores the use of weighted importance sampling to speed up learning in off-policy LSTD-type algorithms. The theoretical results are solid and what one would expect. The computational results are striking. The technique could serve as a useful component in the design of RL algorithms.

Q2: Please summarize your review in 1-2 sentences. The paper is very well-written and presents a useful idea validated by striking computational results.
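As background on the core estimator, a sketch contrasting ordinary and weighted importance sampling on a toy off-policy mean estimate (the distributions, sample sizes, and names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def ois(values, weights):
    """Ordinary importance sampling: unbiased, but variance can blow up."""
    return np.mean(weights * values)

def wis(values, weights):
    """Weighted importance sampling: normalizes by the weight sum;
    slightly biased, but typically far lower variance."""
    return np.sum(weights * values) / np.sum(weights)

# Estimate E[x] under a target N(0.5, 1) using samples from a behavior N(0, 1).
mu = 0.5
ois_est, wis_est = [], []
for _ in range(500):
    x = rng.standard_normal(100)          # behavior-distribution draws
    w = np.exp(mu * x - mu ** 2 / 2.0)    # density ratio target/behavior
    ois_est.append(ois(x, w))
    wis_est.append(wis(x, w))
```

Both estimators concentrate on the true mean 0.5, but the WIS estimates have visibly lower spread; carrying this normalization into LSTD-style updates is the paper's contribution.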



They motivate their approach by first showing that, under some assumptions, the discriminant function over a fully connected graph on the labels can be expressed as the uniform expectation of the discriminant functions over random spanning trees. Through a sampling result, they then show that, with high probability over samples of random spanning trees, there is a conical combination of these trees that achieves a substantial fraction of the margin of a predictor using the complete graph, and they prove a related risk bound for conical combinations over random trees. This motivates optimizing the margin of a conical combination of trees as the predictor; the authors propose a primal (and dual) formulation for this optimization problem (somewhat analogous to the structured SVM), for which a standard dual subgradient method is proposed, as in previous work. They then show that finding the maximizing joint label for the combination of trees (inference) can be done exactly (under an assumption that can be checked at run time) by searching the K-best list of each spanning tree (the latter can be obtained by dynamic programming, as was already mentioned in Tsochantaridis et al. [JMLR 2005]). Experiments on standard multilabel datasets show a small improvement over alternative methods.
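The combinatorial core of the "uniform expectation over random spanning trees" step can be illustrated with uniform spanning trees of the complete graph K_n, where each edge appears with probability 2/n. The Aldous-Broder walk used here is one standard sampler; the graph size and names are my illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)

def uniform_spanning_tree(n):
    """Uniform spanning tree of the complete graph K_n via the
    Aldous-Broder random walk (first-entry edges form the tree)."""
    edges, visited, current = [], {0}, 0
    while len(visited) < n:
        nxt = int(rng.integers(n))
        if nxt == current:
            continue                 # on K_n, just resample a different vertex
        if nxt not in visited:
            edges.append((current, nxt))
            visited.add(nxt)
        current = nxt
    return edges

# Each edge of K_n lies in a uniform spanning tree with probability 2/n.
n, trials = 6, 3000
hits = sum(any({u, v} == {0, 1} for u, v in uniform_spanning_tree(n))
           for _ in range(trials))
freq = hits / trials
```

Averaging tree-structured discriminants over such samples recovers every pairwise potential of the complete graph in expectation, which is what justifies replacing the complete graph by a small ensemble of trees.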



First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. The paper introduces a full model for tracking that allows for a varying number of hypotheses as well as clutter. It promises clear notation and fast algorithms through the use of variational/Baum-Welch-type inference. The experiments appear extensive and are performed on real-world data. The key novelty of this paper is the assignment problem (a.k.a. data association); tracking itself, as the authors acknowledge, is a well-trodden field.



First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This work develops a new exact algorithm for structure learning of chordal Markov networks (MNs) under decomposable score functions. The algorithm implements a dynamic programming approach by introducing recursive partition trees, structures equivalent to junction trees that are well suited to decomposing the problem into smaller instances so as to enable dynamic programming. The authors review the literature, prove the correctness of their algorithm, and compare it against a modified version of GOBNILP, which implements a state-of-the-art method for exact Bayesian network structure learning. The paper is well written, relevant for NIPS, and technically sound.
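A decomposable score in the sense used here factorizes over the cliques and separators of a junction tree. As a minimal sketch, the empirical log-likelihood of a decomposable model on a toy chordal chain 0 - 1 - 2 (the data, scopes, and choice of score are illustrative, not the paper's scoring function):

```python
import numpy as np

rng = np.random.default_rng(5)

def empirical_ll(data, cliques, separators):
    """Log-likelihood of a decomposable model:
    sum of clique terms minus sum of separator terms."""
    def term(scope):
        cols = data[:, list(scope)]
        _, counts = np.unique(cols, axis=0, return_counts=True)
        p = counts / len(data)                    # empirical marginal on the scope
        return float(np.sum(counts * np.log(p)))
    return sum(term(c) for c in cliques) - sum(term(s) for s in separators)

data = rng.integers(0, 2, size=(200, 3))  # 200 samples of 3 binary variables
# Junction tree of the chordal chain 0 - 1 - 2: cliques {0,1}, {1,2}; separator {1}.
ll = empirical_ll(data, cliques=[(0, 1), (1, 2)], separators=[(1,)])
```

Because the score is a sum of local terms like these, partial structures can be scored and combined independently, which is exactly what the recursive partition tree dynamic program exploits.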



First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This paper describes a novel method to train convolutional neural networks (CNNs) in an unsupervised fashion such that the model still learns to be invariant to common transformations. The method proposed by the authors is simple and, to some extent, elegant. The idea is to train the CNN to distinguish image patches and their transformations from other image patches and their transformations. This approach allows the model to learn to be invariant to common transformations. However, as the authors mention, the method is vulnerable to collisions, where distinct image patches -- which the model will try to distinguish -- share the same content.
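The surrogate-class construction the review describes can be sketched as follows: each seed patch defines its own class, and random transformations of it are the class's training examples. The particular transformations, sizes, and names below are illustrative stand-ins for the paper's augmentation set.

```python
import numpy as np

rng = np.random.default_rng(4)

def surrogate_dataset(patches, n_transforms=8):
    """Label each patch's random transformations with the patch's own index."""
    X, y = [], []
    for label, patch in enumerate(patches):
        for _ in range(n_transforms):
            t = patch
            if rng.random() < 0.5:
                t = t[:, ::-1]                           # horizontal flip
            t = np.roll(t, int(rng.integers(-2, 3)), axis=0)  # small vertical shift
            t = t * rng.uniform(0.8, 1.2)                # brightness jitter
            X.append(t)
            y.append(label)
    return np.stack(X), np.array(y)

patches = rng.standard_normal((10, 16, 16))  # 10 "seed" patches
X, y = surrogate_dataset(patches)
```

A CNN trained to classify `X` into the 10 surrogate classes is pushed to ignore the applied transformations; the collision problem arises precisely when two seed patches depict the same content yet receive different labels.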